So welcome to the Tuesday afternoon session on pattern analysis.
Yesterday we brought up a few issues.
I will discuss these on Monday, because I haven't yet had the time to think deeply about the
problems we came up with.
For me it's important that you increasingly get the feeling that all the theoretical
concepts we are talking about are very important for applications.
And yesterday we saw two examples.
One example applied LDA, linear discriminant analysis, and a classifier based on the concept
of LDA.
That was the Adidas One shoe, a very practical system on the market that makes use of
these methods.
You also have to know that 30 years ago, the systems used by post offices to read
addresses and zip codes were based on LDA classifiers as well.
Those systems filled a whole room, not just the heel of a running shoe.
So it's an old concept, but it is still applied in current systems, to current problems that
are important for our practical life.
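To make the idea concrete, here is a minimal sketch of an LDA-based classifier; the data is synthetic and the names are my own choices, not the code of any of the systems mentioned:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two synthetic Gaussian classes in a 2D feature space (hypothetical data)
X = np.vstack([rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2)),
               rng.normal(loc=[3.0, 3.0], scale=1.0, size=(100, 2))])
y = np.repeat([0, 1], 100)

clf = LinearDiscriminantAnalysis()  # fits a linear decision boundary
clf.fit(X, y)
print(clf.predict([[0.5, 0.2], [2.8, 3.1]]))  # expected: [0 1]
```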
Then the second application was PCA for shape modeling. I think I didn't explain this
properly yesterday; while I was explaining it, I noticed that the slides are not yet in a
state where it's plausible what's going on there.
I mean, what does PCA do?
It takes these points and computes the principal axis, that is, the 1D axis along which the
points show the highest spread when you project them onto it.
And then you get the second axis, the second projection direction, where you get the second
highest spread.
And if you have high-dimensional spaces, you can decompose them and find a coordinate
system such that all your points are mapped successively in terms of the spread objective
function: highest spread, second highest spread, third highest spread, and so on.
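As a minimal sketch of this, assuming NumPy and synthetic data of my own making, the principal axes can be computed from the eigendecomposition of the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2D point cloud with correlated coordinates
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])

Xc = X - X.mean(axis=0)               # center the points
C = np.cov(Xc, rowvar=False)          # covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigh, since C is symmetric
order = np.argsort(eigvals)[::-1]     # sort axes by decreasing spread
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Projecting onto the axes yields successively decreasing spread:
proj = Xc @ eigvecs
print(proj.var(axis=0, ddof=1))  # equals eigvals, largest first
```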
And now you can use these vectors to generate new feature vectors: these are the axes, and
you say, I want this direction weighted by a1, I want that direction weighted by a2, and
so on.
And then you can generate a new feature point by using linear combinations of your
eigenvectors.
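Continuing in the same hedged spirit (the weights a1 and a2 below are arbitrary values chosen for illustration), generating such a new point looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])
Xc = X - X.mean(axis=0)
_, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
eigvecs = eigvecs[:, ::-1]  # principal axes, strongest spread first

# Pick weights a1, a2 and form: mean + a1 * v1 + a2 * v2
a = np.array([1.2, -0.4])
new_point = X.mean(axis=0) + eigvecs @ a
print(new_point)
```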
And that's basically exactly what we did here for the representation of shapes.
We used linear combinations of the eigenvectors and generated so-called eigenshapes.
So we encode shapes here in terms of their coordinates with respect to our principal
components.
And this can be used for shape representation in medical engineering applications.
It can also be used for face modeling, for human faces, or even for human body shapes.
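Here is a minimal sketch of this eigenshape encoding on synthetic landmark data; the circle-like shapes and all names are hypothetical, just to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
# 50 synthetic shapes, each 30 2D landmarks flattened to 60 dimensions
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
base = np.column_stack([np.cos(t), np.sin(t)]).ravel()  # a circle
shapes = base + 0.1 * rng.normal(size=(50, 60))

mean_shape = shapes.mean(axis=0)
# PCA via SVD of the centered shape matrix
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
eigenshapes = Vt[:5]  # the first 5 principal components ("eigenshapes")

# Encode a shape by its 5 PCA coordinates, then reconstruct it from them
coords = eigenshapes @ (shapes[0] - mean_shape)
reconstruction = mean_shape + eigenshapes.T @ coords
print(coords.shape, np.abs(reconstruction - shapes[0]).max())
```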
So it's a very powerful technique, and I encourage you to search the web for eigen
representations of patterns; you will find tons of nice applications.
Very basic, very old, yet very subtle technology, applied to very challenging practical
problems.
And today we will continue looking at linear decision boundaries and different ways to
compute them.
And basically the whole story will end up with one message: you have two classes and you